[Feature] support v1 update/clear api for RL #6761

Open
liyonghua0910 wants to merge 6 commits into PaddlePaddle:develop from liyonghua0910:develop+rl_update_weight_v1

Conversation


@liyonghua0910 liyonghua0910 commented Mar 10, 2026

Motivation

This PR upgrades the weight clearing and updating flow for RL scenarios.

The legacy control path mainly relied on shared memory to synchronize state across the engine, worker, and cache-related components. While functional, the signal path was not explicit enough, and it was difficult to trace how failed requests were handled across components. In addition, the old workflow usually cleared weights through clear_load_weight first, even though residual requests could still exist, and then relied on a manual reset_scheduler call to clean up the scheduler queue. This made the lifecycle less explicit and introduced risks of inconsistent states during asynchronous resource recycling.

The goal of this PR is to move the control flow to an explicit control-request path and replace the legacy weight clear/reload flow with the new sleep/wakeup workflow, so state transitions and troubleshooting become more straightforward.

Modifications

  • Switch weight-clear/update control signals to a dedicated ControlRequest/ControlResponse path, so each control request has its own request ID and can be traced through logs end to end.
  • Add /v1/sleep and /v1/wakeup, with tags support to specify which parts of GPU memory should be offloaded or reloaded. Enable these APIs with export FD_ENABLE_V1_UPDATE_WEIGHTS=1.
  • Keep /clear_load_weight and /update_model_weight for compatibility:
    • When FD_ENABLE_V1_UPDATE_WEIGHTS=0, /clear_load_weight and /update_model_weight still rely on shared memory for control and multi-process synchronization.
    • When FD_ENABLE_V1_UPDATE_WEIGHTS=1, /clear_load_weight and /update_model_weight switch to the new control path, using the engine worker queue, engine cache queue, and FMQ for request dispatch and response collection.
  • Extend /v1/pause and /v1/resume with cache-transfer-manager coordination to support multi-level cache and KV-cache-backend scenarios.
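The dedicated control path described above could be modeled along these lines. This is a minimal sketch; the actual ControlRequest/ControlResponse classes in this PR may differ, and all field names here are assumptions:

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class ControlRequest:
    # What to do, e.g. "sleep", "wakeup", "clear_load_weight", "update_model_weight".
    task: str
    # Which GPU memory regions to offload/reload, e.g. ["weights", "kv_cache"].
    tags: list = field(default_factory=list)
    # Unique ID so the request can be traced through logs end to end.
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)


@dataclass
class ControlResponse:
    request_id: str  # echoes the originating request for correlation
    success: bool
    message: str = ""


req = ControlRequest(task="sleep", tags=["weights"])
resp = ControlResponse(request_id=req.request_id, success=True)
```

The per-request ID is what makes the new path traceable: every component that handles the request can log the same `request_id`, unlike the old shared-memory flags.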

Usage or Command

Export the following environment variable when starting the server:

export FD_ENABLE_V1_UPDATE_WEIGHTS=1

Send control requests:

# Legacy-compatible endpoints
curl -i http://<IP>:<PORT>/clear_load_weight
curl -i http://<IP>:<PORT>/update_model_weight

# New control endpoints
curl -X POST http://<IP>:<PORT>/v1/sleep
curl -X POST http://<IP>:<PORT>/v1/wakeup
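The PR description mentions tags support on /v1/sleep for choosing what to offload; the request body would presumably be a small JSON document along these lines (the tag value "weights" is an assumption, not confirmed by the PR text):

```python
import json

# Hypothetical body for POST /v1/sleep: offload only model weights,
# leaving other GPU memory (e.g. the KV cache) resident.
body = json.dumps({"tags": ["weights"]})
# Sent as e.g.:
#   curl -X POST http://<IP>:<PORT>/v1/sleep \
#        -H "Content-Type: application/json" -d '{"tags": ["weights"]}'
```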

Accuracy Tests

  • N/A. This PR focuses on the RL weight-control path, request/control coordination, and interface changes. No model-output accuracy result is included in the current description.

Checklist

  • Add at least a tag in the PR title.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Mar 10, 2026

Thanks for your contribution!

@codecov-commenter

codecov-commenter commented Mar 10, 2026

Codecov Report

❌ Patch coverage is 22.19680% with 340 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@9f0778f). Learn more about missing BASE report.

Files with missing lines Patch % Lines
fastdeploy/cache_manager/cache_transfer_manager.py 9.93% 136 Missing ⚠️
fastdeploy/engine/common_engine.py 12.24% 85 Missing and 1 partial ⚠️
fastdeploy/worker/gpu_model_runner.py 9.52% 38 Missing ⚠️
fastdeploy/rl/dynamic_weight_manager.py 13.95% 37 Missing ⚠️
fastdeploy/worker/worker_process.py 15.38% 7 Missing and 4 partials ⚠️
...astdeploy/inter_communicator/engine_cache_queue.py 58.33% 10 Missing ⚠️
fastdeploy/entrypoints/openai/api_server.py 77.14% 7 Missing and 1 partial ⚠️
fastdeploy/entrypoints/engine_client.py 36.36% 7 Missing ⚠️
fastdeploy/entrypoints/openai/utils.py 25.00% 3 Missing ⚠️
fastdeploy/cache_manager/prefix_cache_manager.py 0.00% 0 Missing and 2 partials ⚠️
... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6761   +/-   ##
==========================================
  Coverage           ?   71.17%           
==========================================
  Files              ?      395           
  Lines              ?    54984           
  Branches           ?     8678           
==========================================
  Hits               ?    39137           
  Misses             ?    13060           
  Partials           ?     2787           
Flag Coverage Δ
GPU 71.17% <22.19%> (?)

Flags with carried forward coverage won't be shown.


self.proposer.clear_mtp_cache()
self.clear_cache()
paddle.device.cuda.empty_cache()
self.is_paused = True
Contributor

What is the is_paused state here for?
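For context on this question: elsewhere in the diff the flag gates worker loops through a condition variable (`self._pause_cond.wait_for(lambda: not self.is_paused)`). A minimal standalone sketch of that pattern, with names assumed to mirror the diff:

```python
import threading


class Worker:
    """Sketch of a pausable loop body gated by is_paused + a Condition."""

    def __init__(self):
        self.is_paused = False
        self._pause_cond = threading.Condition()

    def pause(self):
        with self._pause_cond:
            self.is_paused = True

    def resume(self):
        with self._pause_cond:
            self.is_paused = False
            self._pause_cond.notify_all()  # wake every loop blocked in step()

    def step(self):
        # Blocks here while paused; wait_for returns immediately if not paused.
        with self._pause_cond:
            self._pause_cond.wait_for(lambda: not self.is_paused)
        return "processed"
```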

if app.state.dynamic_load_weight:
status_code, msg = app.state.engine_client.clear_load_weight()
return JSONResponse(content=msg, status_code=status_code)
if envs.FD_ENABLE_V1_UPDATE_WEIGHTS:
Contributor

Should we just add dedicated sleep/wakeup endpoints instead?

Collaborator Author

Added.

if app.state.dynamic_load_weight:
status_code, msg = app.state.engine_client.update_model_weight()
return JSONResponse(content=msg, status_code=status_code)
if envs.FD_ENABLE_V1_UPDATE_WEIGHTS:
Contributor

Should we just add dedicated sleep/wakeup endpoints instead?

Collaborator Author

Added.


self._post_init()

def _post_init(self):
Contributor

It feels better to let each endpoint maintain this itself; keep the request logic as generic as possible.

Collaborator Author

OK.

while self.running:
try:
with self._pause_cond:
self._pause_cond.wait_for(lambda: not self.is_paused)
Contributor

Why does the output path need to be aware of pause? If there are no new requests, output has no new tokens to process; in-flight requests should finish on their own after being preempted by the scheduler, otherwise intermediate tokens may get stuck in the output queue.

Collaborator Author

I forgot to delete this.

engine_cache_queue_port = self.cfg.cache_config.local_cache_queue_port
name = f"ctrl_c2e_rank{tp_rank+tp_size*dp_index}_{engine_cache_queue_port}"
self.llm_logger.info(f"Init Cache Control Output Queue: {name} (consumer)")
self._ctrl_output_queues[name] = FMQ().queue(name, "consumer")
Contributor

The control communication with the worker and with the cache transfer manager should probably be kept separate, since not every control signal will be sent to both.
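As an aside, the global rank encoded in the queue name quoted above is computed as tp_rank + tp_size * dp_index, i.e. each data-parallel group occupies a contiguous block of tensor-parallel ranks. A quick reproduction of the naming scheme:

```python
def ctrl_queue_name(tp_rank: int, tp_size: int, dp_index: int, port: int) -> str:
    # Mirrors the f-string in the quoted snippet: one control queue
    # per global rank, suffixed with the cache queue port.
    global_rank = tp_rank + tp_size * dp_index
    return f"ctrl_c2e_rank{global_rank}_{port}"


# With tp_size=2: dp group 0 owns global ranks 0-1, dp group 1 owns ranks 2-3.
name = ctrl_queue_name(tp_rank=1, tp_size=2, dp_index=1, port=8333)
```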

result = asyncio.run(self._wait_for_control_responses(control_request.request_id, 60, executors=executors))

# Resume the engine after wakeup
self._control_resume(None)
Contributor

Do sleep and wake_up need to include pause and resume internally? Maybe leave that to the upstream controller, so the semantics of sleep and wake_up are clearer.

Collaborator Author

Partly this is a safety net, and partly it keeps the old endpoints' semantics: a single call to clear_load_weight and a single call to sleep should have the same effect. It would indeed be best to split them; we could use a parameter to control whether sleep implicitly includes pause.
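The split proposed here (making the implicit pause/resume opt-in) could look like this. A hypothetical sketch only, not the actual FastDeploy API; method names and the `calls` log are illustrative:

```python
class EngineControl:
    """Hypothetical sketch: sleep/wakeup with an opt-in implicit pause/resume."""

    def __init__(self):
        self.calls = []  # record of control actions, for illustration only

    def _pause(self):
        self.calls.append("pause")

    def _resume(self):
        self.calls.append("resume")

    def sleep(self, include_pause: bool = True):
        # Legacy-compatible default: sleep implies pause,
        # matching the old clear_load_weight semantics.
        if include_pause:
            self._pause()
        self.calls.append("offload")

    def wakeup(self, include_resume: bool = True):
        self.calls.append("reload")
        if include_resume:
            self._resume()


ctl = EngineControl()
ctl.sleep()                        # legacy semantics: pause + offload
ctl.wakeup(include_resume=False)   # upstream controller resumes explicitly
```

With `include_pause=False` / `include_resume=False`, an upstream controller sequences pause, sleep, wakeup, and resume itself, which keeps each endpoint's meaning unambiguous.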
